command activations for emotionnet (MathWorks Inc)
Structured Review

Command Activations For Emotionnet, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/command activations for emotionnet/product/MathWorks Inc
Average 90 stars, based on 1 article review
Images
1) Product Images from "A computational probe into the behavioral and neural markers of atypical facial emotion processing in autism"
Article Title: A computational probe into the behavioral and neural markers of atypical facial emotion processing in autism
Journal: bioRxiv
doi: 10.1101/2021.03.24.436640
Figure Legend Snippet:
A. ANN models of the primate ventral stream (typically comprising V1-, V2-, V4-, and IT-like layers) can be trained to predict human facial emotion judgments. This involves building a regression model, i.e., determining weights on the model layer activations (the predictors) to predict the image ground truth ("level of happiness") on a set of training images, and then testing the model's predictions on held-out images.
B. An ANN model's predicted psychometric curves (e.g., AlexNet, shown here) show the proportion of trials judged as "happy" as a function of facial emotion morph level, ranging from 0% happy (100% fearful; left) to 100% happy (0% fearful; right). This curve demonstrates that activations of an ANN layer (layer 'fc7', which corresponds to the "model-IT" layer) can be successfully trained to predict facial emotions.
C. Comparison of the ANNs' image-level behavioral patterns with the behavior measured in Controls (x-axis) and IwA (y-axis). Four ANNs (with 5 models each, generated from different layers of the ANNs) are shown in different colors. ANN predictions better match the behavior measured in Controls than in IwA. The correlation values (x and y axes) were corrected by per-population noise estimates so that the differences are not due to differences in measurement noise across the IwA and Control subject pools. Dot size reflects the degree of discrepancy between ANN predictivity of Controls vs. IwA.
D. A comparison of ANN predictivity (results from AlexNet shown here) of behavior measured in IwA vs. Controls as a function of model layer (convolutional (cnv) layers 1, 3, 4, and 5 and the fully connected layer 7, 'fc7', which approximately correspond to the ventral stream cortical hierarchy). The difference between the ANN's predictivity of behavior in IwA and Controls increases with depth and is referred to as Δ.
E. Discriminability index (d'; the ability to discriminate between image-level behavioral patterns measured in IwA vs. Controls; see Methods) as a function of model layer (all four tested models shown separately in individual panels). The difference in ANN predictivity between Controls and IwA was largest at the deeper (more IT-like) layers of the models rather than the earlier (more V1-, V2-, and V4-like) layers. Error bars denote bootstrap confidence intervals. Facial images shown in this figure are morphed and processed versions of the original face images. These images have full re-use permission.
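The regression scheme described in panel A (a linear readout from layer activations to a "level of happiness" rating, fit on training images and evaluated on held-out images) can be sketched as follows. This is a minimal illustration using synthetic activations in place of real ANN layer outputs (e.g., AlexNet 'fc7'); the array shapes, the ridge penalty, and the train/test split are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for model layer activations: n_images x n_units.
# In the study these would come from a layer of a trained ANN.
n_images, n_units = 200, 50
activations = rng.normal(size=(n_images, n_units))

# Synthetic ground-truth "level of happiness" per image, generated as a
# noisy linear function of the activations (hypothetical, for illustration).
true_w = rng.normal(size=n_units)
happiness = activations @ true_w + rng.normal(scale=0.5, size=n_images)

# Split into training images and held-out test images.
train, test = np.arange(150), np.arange(150, n_images)

# Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y
lam = 1.0
X, y = activations[train], happiness[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ y)

# Test the fitted readout's predictions on the held-out images.
pred = activations[test] @ w
r = np.corrcoef(pred, happiness[test])[0, 1]
print(f"held-out correlation: {r:.3f}")
```

In the paper's analysis, such held-out correlations are additionally noise-corrected per subject pool (panel C) before comparing model predictivity of Controls vs. IwA; that correction step is not shown here.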
Techniques Used: Generated